
111 all to all alignment #112

Draft

geoffwoollard wants to merge 21 commits into dev from 111-all-to-all-alignment

Conversation

@geoffwoollard (Collaborator) commented Feb 20, 2025

return idx_i, idx_j, None, None, None, None


def mp_main(args):
geoffwoollard (Collaborator, Author):

have to downsample to avoid shared memory issues...
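A minimal sketch of the kind of downsampling this could mean, assuming the inputs are 3-D volumes (the `downsample_volume` helper, average-pooling choice, and shapes are illustrative, not from this branch). Tensors handed to `torch.multiprocessing` workers get placed in shared memory, so shrinking them up front reduces the pressure the comment is worried about:

```python
import torch
import torch.nn.functional as F

def downsample_volume(vol, factor):
    """Shrink a (D, H, W) volume by `factor` along each axis with average
    pooling, so less data must sit in shared memory per worker."""
    # avg_pool3d expects (N, C, D, H, W); add and strip dummy batch/channel dims.
    return F.avg_pool3d(vol[None, None], kernel_size=factor)[0, 0]

# Hypothetical usage before fanning blocks out to mp workers:
vol = torch.randn(128, 128, 128)
small = downsample_volume(vol, 2)  # -> (64, 64, 64), 8x less data to share
```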

return torch.norm(a - b)


def pairwise_norm(a_block, b, rotations_block):
geoffwoollard (Collaborator, Author):

shared memory...?
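For context on the question, a sketch of the mechanism involved (block shapes are made up): `torch.multiprocessing` moves tensors into shared memory when they cross a process boundary, and `share_memory_()` does the same thing explicitly, so every worker reads one underlying buffer instead of a pickled copy:

```python
import torch

# Hypothetical block shapes, for illustration only.
a_block = torch.randn(1000, 128)
rotations_block = torch.randn(1000, 3, 3)

# Move the tensors into shared memory once, before spawning workers;
# child processes then see the same buffer rather than a clone.
a_block.share_memory_()
rotations_block.share_memory_()
print(a_block.is_shared())  # True
```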

Comment on lines +6 to +7
#SBATCH -n 1
#SBATCH -c 64
geoffwoollard (Collaborator, Author):

64 CPUs?
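For reference, `-n 1 -c 64` asks SLURM for one task with 64 cores (`-c` is `--cpus-per-task`). A sketch of how the script could size its worker pool to match the allocation via the `SLURM_CPUS_PER_TASK` environment variable (the `square` function is just a placeholder workload):

```python
import os
from multiprocessing import Pool

# Read the core count granted via -c / --cpus-per-task; fall back to 1
# when running outside the scheduler.
n_cpus = int(os.environ.get("SLURM_CPUS_PER_TASK", "1"))

def square(x):
    return x * x

if __name__ == "__main__":
    with Pool(processes=n_cpus) as pool:
        print(pool.map(square, range(8)))
```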

a_row = a_block[idx] # Shape: (k,)
rotations_row = rotations_block[idx] # Shape: (m, 3, 3)
# Vectorize the custom distance function using torch.vmap
batched_custom_distance = torch.vmap(align_and_distance, in_dims=(None, 0, 0))
A collaborator commented:

this should be defined outside the loop, preferably outside this function
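Something along these lines, with the vmapped callable built once at module scope rather than per loop iteration (the stand-in body for `align_and_distance` and the shapes are assumptions; only the `in_dims=(None, 0, 0)` layout comes from the diff):

```python
import torch

def align_and_distance(a, b, rotation):
    # Stand-in body: rotate b, then take the Euclidean distance to a.
    diff = a - rotation @ b
    return torch.sqrt((diff ** 2).sum())

# Built once at import time: vmap batches over b and rotation (dim 0)
# while broadcasting the same a to every pair.
batched_custom_distance = torch.vmap(align_and_distance, in_dims=(None, 0, 0))

# Hypothetical shapes: one query vector against m candidates/rotations.
a = torch.randn(3)
b = torch.randn(5, 3)
rotations = torch.randn(5, 3, 3)
print(batched_custom_distance(a, b, rotations).shape)  # torch.Size([5])
```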

2 participants